Partial Occam's Razor and Its Applications
Authors
Abstract
We introduce the notion of a "partial Occam algorithm". A partial Occam algorithm produces a succinct hypothesis that is partially consistent with the given examples, where the proportion of consistent examples is slightly more than half. Using this new notion, we propose an approach for obtaining a PAC learning algorithm. First, as shown in this paper, a partial Occam algorithm is equivalent to a weak PAC learning algorithm. Then, by using the boosting techniques of Schapire or Freund, we can obtain an ordinary PAC learning algorithm from this weak PAC learning algorithm. We demonstrate with examples that this approach yields some improvement, in particular in the hypothesis size. First, we obtain a (non-proper) PAC learning algorithm for k-DNF whose sample complexity is similar to that of Littlestone's Winnow, but which produces a hypothesis of size polynomial in d and log n for a k-DNF target with n variables and d terms (cf. the hypothesis size of Winnow is O(n^k)). Next, we show that 1-decision lists of length d over n variables are (non-proper) PAC learnable using O((1/ε)(log(1/δ) + 16^d log n (d + log log n)^2)) examples, in time polynomial in n, 2^d, 1/ε, and log(1/δ). Again, we obtain a sample complexity similar to Winnow's for the same problem, but with a much smaller hypothesis size. We also show that our algorithms are robust against random classification noise.
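The pipeline the abstract describes (a succinct, partially consistent weak hypothesis, then boosting) can be illustrated with a toy sketch. This is not the paper's construction: as assumptions, the weak learner below outputs a single-literal "stump" (a deliberately tiny hypothesis that is right on a bit more than half of the weighted examples), the booster is standard AdaBoost rather than Schapire's or Freund's original schemes, and the target is an arbitrary 2-term 1-DNF over 4 boolean variables.

```python
# Hedged sketch: partial-Occam-style weak learner + boosting.
# All names, the toy target, and the round count are illustrative choices,
# not taken from the paper.
import math

def stump_candidates(n):
    # Succinct hypotheses: one literal (x[i] or its negation) or a constant.
    for i in range(n):
        for s in (+1, -1):
            yield lambda x, i=i, s=s: s if x[i] == 1 else -s
    yield lambda x: +1
    yield lambda x: -1

def weak_learn(X, y, w):
    # Return the succinct hypothesis with smallest weighted error --
    # "partially consistent": correct on somewhat more than half the weight.
    best, best_err = None, 2.0
    for h in stump_candidates(len(X[0])):
        err = sum(wi for xi, yi, wi in zip(X, y, w) if h(xi) != yi)
        if err < best_err:
            best, best_err = h, err
    return best, best_err

def boost(X, y, rounds):
    # Standard AdaBoost over the weak learner above.
    m = len(X)
    w = [1.0 / m] * m
    ensemble = []
    for _ in range(rounds):
        h, err = weak_learn(X, y, w)
        if err >= 0.5:          # no weak advantage left
            break
        err = max(err, 1e-12)   # guard against division by zero
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, h))
        w = [wi * math.exp(-alpha * yi * h(xi)) for xi, yi, wi in zip(X, y, w)]
        z = sum(w)
        w = [wi / z for wi in w]
    return lambda x: 1 if sum(a * h(x) for a, h in ensemble) >= 0 else -1

# Toy target: a 2-term 1-DNF, f(x) = x0 OR x1, over all 16 assignments.
X = [[(v >> i) & 1 for i in range(4)] for v in range(16)]
y = [1 if x[0] or x[1] else -1 for x in X]
H = boost(X, y, rounds=5)
print(sum(H(x) == yi for x, yi in zip(X, y)))  # 16: all examples classified correctly
```

No single stump is consistent with all examples here (the best literal is wrong on 4 of 16), yet the boosted majority of a few stumps classifies every example correctly, which is exactly the weak-to-strong conversion the abstract relies on.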
Similar Resources
Sharpening Occam's Razor
We provide a new representation-independent formulation of Occam’s razor theorem, based on Kolmogorov complexity. This new formulation allows us to: (i) Obtain better sample complexity than both length-based [4] and VC-based [3] versions of Occam’s razor theorem, in many applications; and (ii) Achieve a sharper reverse of Occam’s razor theorem than that of [5]. Specifically, we weaken the assum...
Genetic Programming with Guaranteed Quality
When using genetic programming (GP) or other techniques that try to approximate unknown functions, the principle of Occam's razor is often applied: find the simplest function that explains the given data, as it is assumed to be the best approximation for the unknown function. Using a well-known result from learning theory, it is shown in this paper, how Occam's razor can help GP in finding function...
Occam's Two Razors: The Sharp and the Blunt
Occam's razor has been the subject of much controversy. This paper argues that this is partly because it has been interpreted in two quite different ways, the first of which (simplicity is a goal in itself) is essentially correct, while the second (simplicity leads to greater accuracy) is not. The paper reviews the large variety of theoretical arguments and empirical evidence for and against the ...
Extending Occam's Razor
Occam's Razor states that, all other things being equal, the simpler of two possible hypotheses is to be preferred. A quantified version of Occam's Razor has been proven for the PAC model of learning, giving sample-complexity bounds for learning using what Blumer et al. call an Occam algorithm [1]. We prove an analog of this result for Haussler's more general learning model, which encompasses le...
Journal: Inf. Process. Lett.
Volume: 64, Issue: -
Pages: -
Published: 1997